# Trained on 1.4B Samples

CLIP-ViT-B-32-256x256-DataComp-s34B-b86K
License: MIT
This is a CLIP ViT-B/32 model trained on the DataComp-1B dataset with the OpenCLIP framework at 256x256 resolution, intended primarily for zero-shot image classification and image-text retrieval.
Task: Text-to-Image
Author: laion
Downloads: 4,332 · Likes: 8
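Zero-shot classification with a CLIP model works by embedding the image and a set of class-name prompts into a shared space and picking the prompt with the highest cosine similarity. The sketch below illustrates that scoring step with toy NumPy vectors standing in for real encoder outputs; in practice the embeddings would come from this model's image and text encoders (e.g. loaded through the OpenCLIP library), and the function name here is only illustrative.

```python
import numpy as np

def zero_shot_classify(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """Return the index of the class prompt most similar to the image.

    image_emb: (d,) image embedding
    text_embs: (n_classes, d) one embedding per class prompt
    """
    # CLIP compares L2-normalized embeddings by cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity of the image against each prompt
    return int(np.argmax(sims))

# Toy example: the image embedding lies closest to the "dog" prompt.
prompts = ["a photo of a cat", "a photo of a dog"]
text_embs = np.array([[1.0, 0.1, 0.0],
                      [0.0, 1.0, 0.2]])
image_emb = np.array([0.1, 0.9, 0.3])
print(prompts[zero_shot_classify(image_emb, text_embs)])  # → a photo of a dog
```

Image-text retrieval uses the same similarity matrix in the other direction: rank images by their cosine similarity to a query text embedding.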